-
Abstract: Computational inverse problems utilize a finite number of measurements to infer a discrete approximation of the unknown parameter function. With motivation from the setting of PDE-based optimization, we study the unique reconstruction of discretized inverse problems by examining the positivity of the Hessian matrix. What is the reconstruction power of a fixed number of data observations? How many parameters can one reconstruct? Here we describe a probabilistic approach and spell out the interplay between the observation size (r) and the number of parameters to be uniquely identified (m). The technical pillar is a random sketching strategy, in which matrix concentration inequalities and sampling theory are heavily employed. By analyzing a randomly subsampled Hessian matrix, we obtain a well-conditioned reconstruction problem with high probability. Our main theory is validated in numerical experiments, using an elliptic inverse problem as an example.
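The subsampling idea above can be illustrated with a toy Gauss-Newton Hessian. A minimal sketch, in which a random Jacobian stands in for the PDE sensitivities; all names, sizes, and distributions here are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

m = 20          # number of parameters to identify
r_full = 400    # total available observations
r = 120         # size of the random sketch (r comfortably larger than m)

# Toy sensitivity (Jacobian) matrix: one row per observation.
J = rng.standard_normal((r_full, m))

# Random sketching: keep r uniformly subsampled observation rows.
idx = rng.choice(r_full, size=r, replace=False)
J_r = J[idx]

# Gauss-Newton Hessian of the subsampled reconstruction problem.
H_r = J_r.T @ J_r / r

# Positivity of the Hessian corresponds to unique local reconstruction
# of the m parameters; the condition number measures how well-posed
# the subsampled problem is.
eigs = np.linalg.eigvalsh(H_r)      # ascending order
smallest = eigs[0]
condition = np.linalg.cond(H_r)
```

With r sufficiently larger than m, the subsampled Hessian stays positive definite and well conditioned with high probability, which is the behavior the matrix concentration argument quantifies.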
-
Abstract: With nested grids or related approaches, it is known that numerical artifacts can be generated at the interface of mesh refinement. Most existing methods for minimizing these artifacts are either problem-dependent or numerical-method-dependent. In this paper, we propose a universal predictor-corrector approach to minimize these artifacts. By construction, the approach can be applied to a wide class of models and numerical methods, not by modifying the existing methods but by incorporating an additional step. The idea is to use an additional grid setup with a refinement interface at a different location, and then to correct the predicted state near the refinement interface using information from the other grid setup. We analyze the method in the setting of a one-dimensional advection equation, showing that its success hinges on an optimized choice of the weight functions, which determine the strength of the corrector at a given location. The method is also tested in more general settings by numerical experiments, including shallow water equations, multi-dimensional problems, and a variety of underlying numerical methods including finite difference/finite volume and spectral element. Numerical tests suggest the effectiveness of the method in reducing numerical artifacts due to mesh refinement.
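The corrector step can be sketched in one dimension. A hedged toy, in which Gaussian bumps stand in for refinement artifacts and a simple Gaussian weight function stands in for the optimized weights; every choice here is an illustrative assumption, not the paper's construction:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)

# Predicted states from two grid setups whose refinement interfaces
# sit at different locations; a spurious bump near each interface
# stands in for the refinement artifact.
interface_a, interface_b = 0.5, 0.7
u_exact = np.sin(2 * np.pi * x)

def artifact(center):
    return 0.05 * np.exp(-((x - center) / 0.02) ** 2)

u_a = u_exact + artifact(interface_a)   # artifact near grid A's interface
u_b = u_exact + artifact(interface_b)   # artifact near grid B's interface

# Weight function: trust grid B's prediction near grid A's interface,
# and fall back to grid A's prediction elsewhere.
w = np.exp(-((x - interface_a) / 0.05) ** 2)   # ~1 near interface_a
u_corrected = (1.0 - w) * u_a + w * u_b

# Measure the artifact near grid A's interface before and after correction.
near_a = np.abs(x - interface_a) < 0.05
err_before = np.max(np.abs((u_a - u_exact)[near_a]))
err_after = np.max(np.abs((u_corrected - u_exact)[near_a]))
```

Because grid B's artifact lives at a different location, the weighted corrector suppresses the bump at grid A's interface; the paper's analysis concerns how to choose such weights optimally.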
-
Abstract: The inverse problem for radiative transfer is important in many applications, such as optical tomography and remote sensing. Major challenges include large memory requirements and computational expense, which arise from high dimensionality and the need for iterations in solving the inverse problem. Here, to alleviate these issues, we propose adaptive-mesh inversion: a goal-oriented hp-adaptive mesh refinement method for solving inverse radiative transfer problems. One novel aspect is that the two optimizations (one for inversion, and one for mesh adaptivity) are treated simultaneously and blended together. By exploiting the connection between duality-based mesh adaptivity and adjoint-based inversion techniques, we propose a goal-oriented error estimator, which is cheap to compute and can efficiently guide mesh refinement in numerically solving the inverse problem. We use discontinuous Galerkin spectral element methods to discretize the forward and adjoint problems. Then, based on the goal-oriented error estimator, we propose an hp-adaptive algorithm to refine the meshes. Numerical experiments are presented at the end and show convergence speed-up and reduced memory occupation with the goal-oriented mesh-adaptive method.
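The dual-weighted marking idea behind such goal-oriented estimators can be sketched as follows. A toy, in which random per-cell residuals and adjoint weights stand in for the quantities computed from the DG discretizations of the forward and adjoint problems; the names and the 25% marking fraction are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 64

# Per-cell forward residuals and adjoint (dual) weights; in the actual
# method these come from the forward and adjoint radiative transfer solves.
residual = rng.random(n_cells)
adjoint_weight = rng.random(n_cells)

# Goal-oriented (dual-weighted residual) error indicator per cell:
# cells where a large residual meets a large adjoint sensitivity
# contribute most to the error in the goal quantity.
eta = np.abs(adjoint_weight * residual)

# Mark the 25% of cells with the largest indicators for refinement.
n_mark = n_cells // 4
marked = np.argsort(eta)[-n_mark:]
```

The indicator is cheap (one product per cell once the adjoint is available), which is what makes it practical to recompute inside the inversion loop.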
-
In free-space optical communications and other applications, it is desirable to design optical beams that have reduced, or even minimal, scintillation. However, the optimization problem for minimizing scintillation is challenging, and few optimal solutions have been found. Here we investigate the general optimization problem of minimizing scintillation and formulate it as a convex optimization problem. An analytical solution is found and demonstrates that a beam that minimizes scintillation is incoherent light (i.e., spatially uncorrelated). Furthermore, numerical solutions show that beams minimizing scintillation yield very low intensity at the receiver. To counteract this effect, we study a new convex cost function that balances both scintillation and intensity. We show through numerical experiments that the minimizers of this cost function reduce scintillation while preserving a significantly higher intensity at the receiver.
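The balancing idea can be illustrated with a stand-in cost (not the paper's actual functional): trade a "scintillation" quadratic form against a "received intensity" quadratic form over unit-norm modal coefficients, in which case the minimizer is an eigenvector. All matrices below are random placeholders for the operators that would come from the propagation model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8   # number of beam modes (illustrative)

# Toy positive semidefinite quadratic forms on the modal coefficients.
A = rng.standard_normal((n, n))
S = A @ A.T                      # stand-in "scintillation" form
B = rng.standard_normal((n, n))
P = B @ B.T                      # stand-in "received intensity" form

gamma = 0.5                      # balance parameter: weight on intensity
C = S - gamma * P                # balanced cost  a^T S a - gamma a^T P a

# Minimize the balanced cost over unit-norm coefficient vectors:
# the minimizer is the eigenvector for the smallest eigenvalue of C.
vals, vecs = np.linalg.eigh(C)   # eigenvalues in ascending order
a_opt = vecs[:, 0]

def cost(a):
    return a @ S @ a - gamma * (a @ P @ a)
```

Raising gamma pushes the minimizer toward modes with higher received intensity at the price of more scintillation, which is the trade-off the abstract describes.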
-
The quasi-geostrophic (QG) equations play a crucial role in our understanding of atmospheric and oceanic fluid dynamics. Nevertheless, the traditional QG equations describe 'dry' dynamics that do not account for moisture and clouds. To move beyond the dry setting, precipitating QG (PQG) equations have been derived recently using formal asymptotics. Here, we investigate whether the moist Boussinesq equations with phase changes will converge to the PQG equations. A priori, it is possible that the nonlinearity at the phase interface (cloud edge) may complicate convergence. A numerical investigation of convergence or non-convergence is presented here. The numerical simulations consider the cases ϵ = 0.1, 0.01, and 0.001, where ϵ is proportional to the Rossby and Froude numbers. In the numerical simulations, the magnitude of the vertical velocity w (and other measures of imbalance and inertio-gravity waves) is seen to be approximately proportional to ϵ as ϵ decreases, which suggests convergence to PQG dynamics. These measures are quantified at a fixed time T that is O(1), and the numerical data also suggest the possibility of convergence at later times. This article is part of the theme issue 'Mathematical problems in physical fluid dynamics (part 2)'.
-
Abstract: Potential vorticity (PV) is one of the most important quantities in atmospheric science. In the absence of dissipative processes, the PV of each fluid parcel is known to be conserved for a dry atmosphere. However, a parcel's PV is not conserved if clouds or phase changes of water occur. Recently, PV conservation laws were derived for a cloudy atmosphere, where each parcel's PV is not conserved but parcel-integrated PV is conserved, for integrals over certain volumes that move with the flow. Hence a variety of different statements are now possible for moist PV conservation and non-conservation, and in comparison to the case of a dry atmosphere, the situation for moist PV is more complex. Here, in light of this complexity, several different definitions of moist PV are compared for a cloudy atmosphere. Numerical simulations are shown for a rising thermal, both before and after the formation of a cloud. These simulations include the first computational illustration of the parcel-integrated moist PV conservation laws. The comparisons, both theoretical and numerical, serve to clarify and highlight the different statements of conservation and non-conservation that arise for different definitions of moist PV.
-
When an optical beam propagates through a turbulent medium such as the atmosphere or ocean, the beam becomes distorted. It is then natural to seek the best or optimal beam that is least distorted, under some metric such as intensity or scintillation. We seek to maximize the light intensity at the receiver, using the paraxial wave equation with weak fluctuations as the model. In contrast to classical results that typically confine the original laser beam to a special class, we allow the beam to be general, which leads to an eigenvalue problem for a large matrix whose every entry is a multi-dimensional integral. This is an expensive, and sometimes infeasible, computational task in many practically reasonable settings. To overcome this expense, in a change from past calculations of optimal beams, we transform the calculation from physical space to Fourier space. Since the structure of the turbulence is commonly described in Fourier space, the computational cost is significantly reduced. This also allows us to incorporate optional turbulence assumptions, such as the homogeneous-statistics assumption, the small-length-scale cutoff assumption, and the Markov assumption, to further reduce the dimension of the numerical integral. The proposed methods provide a computational strategy that is numerically feasible, and results are demonstrated in several numerical examples. These results provide further evidence that special beams can be designed to have small beam divergence.
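The resulting eigenvalue problem can be sketched with a toy Hermitian kernel. Here a random Hermitian matrix stands in for the kernel whose entries would be the multi-dimensional Fourier-space integrals of the turbulence statistics; this is an illustrative assumption, not the paper's actual kernel:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16   # number of beam modes (illustrative)

# Stand-in Hermitian positive semidefinite kernel: in the actual method,
# each entry is a multi-dimensional integral assembled in Fourier space,
# where the turbulence spectrum is specified.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
K = M @ M.conj().T

# The optimal (general, unconstrained-class) beam is the eigenvector of
# the largest eigenvalue: it maximizes the Rayleigh quotient, i.e., the
# mean received intensity per unit input power.
vals, vecs = np.linalg.eigh(K)   # eigenvalues in ascending order
beam = vecs[:, -1]
intensity = vals[-1]
```

For a general n-mode beam the kernel has n^2 entries, which is why moving the entry-wise integrals to Fourier space, where the turbulence assumptions collapse some integration dimensions, matters so much for feasibility.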